10th World Congress in Probability and Statistics

Contributed Session (live Q&A at Track 2, 9:30 PM KST)

Contributed 02

Financial Mathematics and Probabilistic Modeling

Conference: 9:30 PM — 10:00 PM KST
Local: Tue, Jul 20, 5:30 AM — 6:00 AM PDT

Solving the selection-recombination equation: ancestral lines and duality

Frederic Alberti (Bielefeld University)

The selection-recombination equation is a high-dimensional, nonlinear system of ordinary differential equations that describes the evolution of the genetic type composition of a population under selection and recombination, in a law of large numbers regime. So far, explicit solutions have seemed out of reach; only in the special case of three loci, with selection acting on one of them, has an approximate solution been found, but without an obvious path to generalisation.
We consider the case of an arbitrary number of neutral loci linked to a single selected locus. In this setting, we investigate how the (random) genealogical structure of the problem can be succinctly encoded by a novel 'ancestral initiation graph', and how it gives rise to a recursive integral representation of the solution with a clear probabilistic interpretation.

References:

- F. Alberti and E. Baake, Solving the selection-recombination equation: Ancestral lines under selection and recombination, https://arxiv.org/abs/2003.06831

- F. Alberti, E. Baake and C. Herrmann, Selection, recombination, and the ancestral initiation graph, https://arxiv.org/abs/2101.10080
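
For orientation, the kind of dynamics in question can be written schematically as follows (a sketch in generic notation, which need not match the authors' exact formulation): with $\omega_t$ the distribution of types $x = (x_1, \dots, x_n)$, $F$ a fitness function depending only on the selected locus, and $\varrho_A$ the rate of a recombination event separating the loci in $A$ from the rest,

\[
\dot{\omega}_t(x) \;=\; \omega_t(x)\Bigl(F(x) - \sum_y F(y)\,\omega_t(y)\Bigr) \;+\; \sum_A \varrho_A \Bigl((\pi_A \omega_t)(x_A)\,(\pi_{A^c} \omega_t)(x_{A^c}) - \omega_t(x)\Bigr),
\]

where $\pi_A \omega$ denotes the marginal of $\omega$ with respect to the loci in $A$. The quadratic selection term and the product-measure recombination term make the nonlinearity of the system apparent.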

Short time asymptotics for modulated rough stochastic volatility models

Barbara Pacchiarotti (Università degli studi di Roma "Tor Vergata")

In this paper, we establish a small-time large deviation principle for log-price processes whose volatility is a function of a modulated Volterra process. By a modulated process we mean a Volterra process whose self-similar kernel is multiplied by a slowly varying function. We also deduce short-time asymptotics for the implied volatility and for pricing.
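
As an illustration of the setting (our own schematic notation, assumed for concreteness rather than taken from the paper), one may think of a log-price $X$ with

\[
dX_t = -\tfrac{1}{2}\,\sigma^2(\hat{B}_t)\,dt + \sigma(\hat{B}_t)\,dW_t, \qquad \hat{B}_t = \int_0^t (t-s)^{H-\frac{1}{2}}\,\ell(t-s)\,dB_s,
\]

where $\ell$ is slowly varying, so that the kernel is self-similar up to the modulation by $\ell$. The large deviation principle then governs the behaviour of $X_t$ as $t \to 0$ and translates into small-time implied volatility asymptotics.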

How to detect a salami slicer: a stochastic controller-stopper game with unknown competition

Kristoffer Lindensjö (Stockholm University)


Q&A for Contributed Session 02

This talk does not have an abstract.

Session Chair

Hyungbin Park (Seoul National University)

Contributed 07

SDEs and Fractional Brownian Motions

Conference: 9:30 PM — 10:00 PM KST
Local: Tue, Jul 20, 5:30 AM — 6:00 AM PDT

Weak rough-path type solutions for singular Lévy SDEs

Helena Katharina Kremp (Freie Universität Berlin)

Since the works of Delarue and Diel, and of Cannizzaro and Chouk (in the Brownian noise setting), and our previous work, the existence and uniqueness of solutions to the martingale problem associated to multidimensional SDEs with additive $\alpha$-stable Lévy noise for $\alpha \in (1,2]$ and rough Besov drift of regularity $\beta \in ((2-2\alpha)/3, 0]$ is known. Motivated by the equivalence of probabilistic weak solutions to SDEs with bounded, measurable drift and solutions to the martingale problem, we define a (non-canonical) weak solution concept for singular Lévy diffusions, and moreover prove its equivalence to the martingale solution in both the Young regime (i.e. $\beta > (1-\alpha)/2$) and the rough regime (i.e. $\beta > (2-2\alpha)/3$). This turns out to be highly non-trivial in the rough case and forces us to define the rough stochastic sewing integrals involved. In particular, we show that the canonical weak solution concept (introduced also by Athreya, Butkovsky and Mytnik in the Young case), which is well-posed in the Young case, yields non-uniqueness of solutions in the rough case.
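
Schematically, the setting is (a summary sketch in the notation above, with $B^{\beta}_{\infty,\infty}$ the Besov-Hölder space):

\[
dX_t = b(X_{t-})\,dt + dL_t, \qquad b \in B^{\beta}_{\infty,\infty}, \quad \alpha \in (1,2],
\]

with $L$ an $\alpha$-stable Lévy process. The Young regime corresponds to $\beta > (1-\alpha)/2$ and the rough regime to the larger range $\beta > (2-2\alpha)/3$.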

Functional limit theorems for approximating irregular SDEs, general diffusions and their exit times

Mikhail Urusov (University of Duisburg-Essen)

We propose a new approach for approximating one-dimensional continuous Markov processes in law. More specifically, we discuss the following results:
(1) A functional limit theorem (FLT) for weak approximation of the paths of arbitrary continuous Markov processes;
(2) An FLT for weak approximation of the paths and exit times.
The second FLT has a stronger conclusion but requires a stronger assumption, which is essential. We propose a new scheme, called EMCEL, which satisfies the assumption of the second FLT and thus allows us to approximate every one-dimensional continuous Markov process together with its exit times. The approach is illustrated by a couple of examples with peculiar behavior: an irregular SDE for which the corresponding Euler scheme does not converge even weakly, a sticky Brownian motion, and a Brownian motion slowed down on the Cantor set.

This is a joint work with Stefan Ankirchner and Thomas Kruse.
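
To convey the flavour of such schemes, here is a minimal toy sketch (our own illustration, not the authors' implementation) of the underlying idea in the simplest case of standard Brownian motion: choose the space step $a_h(x)$ so that the expected exit time of $(x - a_h(x), x + a_h(x))$ equals $h$. For Brownian motion this gives $a_h \equiv \sqrt{h}$ and the chain is a fair coin-flip walk; for general processes, $a_h(x)$ is determined by the speed measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def emcel_like_bm(h, n_steps):
    """Coin-flip chain approximating standard Brownian motion.

    Each step moves +/- a_h with a_h = sqrt(h): for Brownian motion the
    expected exit time of (x - a_h, x + a_h) started at x is a_h**2 = h.
    For general diffusions a_h(x) is defined via the speed measure
    (not implemented in this toy sketch).
    """
    steps = rng.choice([-1.0, 1.0], size=n_steps) * np.sqrt(h)
    return np.concatenate(([0.0], np.cumsum(steps)))

# Sanity check: for Brownian motion started at 0, the exit time of (-1, 1)
# has expectation 1; the chain's exit times should reproduce this.
h = 1e-3
exit_times = []
for _ in range(1000):
    path = emcel_like_bm(h, n_steps=20_000)      # time horizon 20 >> 1
    hit = np.flatnonzero(np.abs(path) >= 1.0)
    if hit.size:
        exit_times.append(h * hit[0])
print("mean exit time:", np.mean(exit_times))    # close to 1
```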

Q&A for Contributed Session 07

This talk does not have an abstract.

Session Chair

Ildoo Kim (Korea University)

Contributed 28

Neural Networks and Deep Learning

Conference: 9:30 PM — 10:00 PM KST
Local: Tue, Jul 20, 5:30 AM — 6:00 AM PDT

Simulated Annealing-Backpropagation Algorithm on Parallel Trained Maxout Networks (SABPMAX) in detecting credit card fraud

Sheila Mae Golingay (University of the Philippines-Diliman)

Building on the backpropagation (BP) artificial neural network algorithm, this study introduces the idea of combining it with Simulated Annealing (SA), a global search algorithm, and proposes a new neural network algorithm: the Simulated Annealing-Backpropagation Algorithm on Parallel Trained Maxout Networks (SABPMAX). The proposed algorithm improves numerical stability and evaluation measures in detecting credit card fraud. It uses the global search capability of SA and the precise local search of backpropagation to improve the initial weights of the network and thereby the detection of credit card fraud. Several models were built and tested using different fraud distributions, and separate applications of the BP and SABPMAX algorithms were compared. Numerical results show a higher accuracy rate, higher sensitivity, shorter computing time, and overall better performance for the SABPMAX algorithm.
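
A minimal sketch of the two-stage idea (toy data, and a single ReLU hidden layer standing in for the parallel-trained maxout networks; everything here is a hypothetical illustration, not the study's implementation): simulated annealing performs a global search over the weight space, and backpropagation then refines the best weights found.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, class-imbalanced binary data standing in for fraud labels.
X = rng.normal(size=(500, 8))
w_true = rng.normal(size=8)
y = (X @ w_true + 0.5 * rng.normal(size=500) > 1.5).astype(float)  # rare positives

def forward(params, X):
    W1, b1, w2, b2 = params
    h = np.maximum(X @ W1 + b1, 0.0)             # hidden ReLU layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

def loss(params):
    p = forward(params, X)
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

params = [rng.normal(scale=0.5, size=(8, 16)), np.zeros(16),
          rng.normal(scale=0.5, size=16), np.zeros(())]

# Stage 1: simulated annealing -- global search over the weight space.
T, cur = 1.0, loss(params)
for step in range(2000):
    proposal = [p + rng.normal(scale=0.1, size=np.shape(p)) for p in params]
    new = loss(proposal)
    if new < cur or rng.random() < np.exp((cur - new) / T):  # Metropolis rule
        params, cur = proposal, new
    T *= 0.999                                               # cooling schedule

# Stage 2: backpropagation -- precise local search from the SA weights.
lr = 0.1
for step in range(2000):
    W1, b1, w2, b2 = params
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    g_z = (p - y) / len(y)                       # cross-entropy + sigmoid gradient
    g_h = np.outer(g_z, w2) * (h_pre > 0)        # backprop through the ReLU
    params = [W1 - lr * (X.T @ g_h), b1 - lr * g_h.sum(axis=0),
              w2 - lr * (h.T @ g_z), b2 - lr * g_z.sum()]

print("final training loss:", loss(params))
```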

The smoking gun: statistical theory improves neural network estimates

Sophie Langer (Technische Universität Darmstadt)

In this talk we analyze the $L_2$ error of neural network regression estimates with one hidden layer. Under the assumption that the Fourier transform of the regression function decays suitably fast, we show that an estimate, where all initial weights are chosen according to proper uniform distributions and where the weights are learned by gradient descent, achieves a rate of convergence of $1/\sqrt{n}$ (up to a logarithmic factor). Our statistical analysis implies that the key aspect behind this result is the proper choice of the initial inner weights and the adjustment of the outer weights via gradient descent. This indicates that we can also simply use linear least squares to choose the outer weights. We prove a corresponding theoretical result and compare our new linear least squares neural network estimate with standard neural network estimates via simulated data. Our simulations show that our theoretical considerations lead to an estimate with improved performance. Hence the development of statistical theory can indeed improve neural network estimates.
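
The point about the outer weights is easy to illustrate. Below is a minimal sketch (toy one-dimensional data and a logistic squasher, chosen by us for illustration): the inner weights are drawn from uniform distributions and frozen, and the outer weights are then obtained by ordinary linear least squares rather than gradient descent.

```python
import numpy as np

rng = np.random.default_rng(1)

n, K = 200, 50                                          # sample size, hidden neurons
x = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * x[:, 0]) + 0.1 * rng.normal(size=n)    # toy regression data

# Inner weights: drawn from uniform distributions, then frozen.
W = rng.uniform(-10.0, 10.0, size=(1, K))
b = rng.uniform(-10.0, 10.0, size=K)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Outer weights: plain linear least squares on the hidden-layer features.
H = np.column_stack([sigmoid(x @ W + b), np.ones(n)])   # n x (K+1) design matrix
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ coef
print("empirical L2 training error:", np.mean((y_hat - y) ** 2))
```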

Stochastic block model for multiple networks

Tabea Rebafka (Sorbonne Université)

A model-based approach for the analysis of a collection of observed networks is considered. We propose to fit a stochastic block model to the data. The novelty consists in the analysis of not a single, but multiple networks. The major challenge resides in the development of a computationally efficient algorithm. Our method is an agglomerative algorithm based on the integrated classification likelihood criterion that performs model selection and node clustering simultaneously. Compared to the single-network context, an additional difficulty lies in the necessity to compare networks with one another and to aggregate partial solutions. We propose a distance measure to compare stochastic block models and solve the label switching problem among graphs in a computationally efficient way.
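
For concreteness, this is the kind of data the method takes as input (a toy generator under assumed parameters, not the authors' code): a collection of $M$ networks drawn from one common stochastic block model, each with its own latent node labels.

```python
import numpy as np

rng = np.random.default_rng(2)

pi = np.array([0.6, 0.4])                # block proportions (Q = 2 blocks)
gamma = np.array([[0.8, 0.1],
                  [0.1, 0.5]])           # between-block connection probabilities
M, n = 5, 40                             # number of networks, nodes per network

networks, labels = [], []
for _ in range(M):
    z = rng.choice(2, size=n, p=pi)      # latent labels, specific to this network
    P = gamma[z][:, z]                   # edge probability matrix
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1)
    networks.append(A + A.T)             # undirected adjacency, no self-loops
    labels.append(z)
```

The clustering task is then to recover $\pi$, $\gamma$, and the labels jointly across all $M$ networks; the agglomerative ICL step itself is not sketched here.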

Deep neural networks for faster nonparametric regression models

Mehmet Ali Kaygusuz (The Middle East Technical University)

Deep neural networks have attracted much attention in recent years due to their huge success in application areas such as signal processing, biological networks, and time series analysis. Schmidt-Hieber (2020) suggested feedforward neural networks for generalized additive models (GAMs) with sparsity and the ReLU activation function. However, over-parametrization can be challenging when the number of parameters exceeds the number of samples, a problem studied by Bauer and Kohler (2019). We therefore use bootstrap methods to cope with this problem, since bootstrap methods (Efron, 1979) are computationally fast and reduce variance. Specifically, we propose the smooth bootstrap method (Sen et al., 2010), which can be more appropriate for nonparametric regression while capturing nonlinearity and interactions between variables, resulting in a better bias-variance trade-off. When combining the bootstrap with a multilayer neural network and GAM approaches, we also aim to optimize model selection in GAMs via distinct model selection criteria, namely the consistent Akaike information criterion with Fisher matrix and information complexity (Bozdogan, 1987). We evaluate the performance of all suggested models on protein-protein interaction network datasets of different dimensions and on biomedical signal data, in terms of various accuracy measures.
[1] B. Bauer and M. Kohler, "On deep learning as a remedy for the curse of dimensionality in nonparametric regression", The Annals of Statistics, 47(4), 2019, 2261-2285.
[2] B. Efron, "Bootstrap methods: another look at the jackknife", The Annals of Statistics, 7(1), 1979, 1-26.
[3] H. Bozdogan, "Model selection and Akaike's information criterion (AIC): The general theory and its analytical extensions", Psychometrika, 52(3), 1987, 345-370.
[4] B. Sen, M. Banerjee and M. Woodroofe, "Inconsistency of bootstrap: The Grenander estimator", The Annals of Statistics, 38(4), 2010, 1953-1977.
[5] J. Schmidt-Hieber, "Nonparametric regression using deep neural networks with ReLU activation function", The Annals of Statistics, 48(4), 2020, 1875-1897.
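
To make the smooth bootstrap concrete, here is a minimal sketch (toy data and an arbitrary statistic, chosen by us; bandwidth and kernel are illustrative assumptions): each bootstrap draw is an ordinary Efron resample perturbed by kernel noise.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(size=100)            # toy observed sample

def smooth_bootstrap(x, stat=np.mean, B=500, h=0.2):
    """Smooth bootstrap: resample with replacement, then add kernel noise.

    With a Gaussian kernel this samples from a kernel density estimate of
    the data; h = 0 recovers the ordinary (Efron) bootstrap.
    """
    n = len(x)
    return np.array([stat(rng.choice(x, size=n) + h * rng.normal(size=n))
                     for _ in range(B)])

boot = smooth_bootstrap(x)
print("smooth-bootstrap SE of the mean:", boot.std())
```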

Generative model for fBm with deep ReLU neural networks

Michael Allouche (Ecole Polytechnique)

Over the last few years, a new paradigm of generative models based on neural networks has shown impressive results in simulating, with high fidelity, objects in high dimension, while being fast in the simulation phase. In this work, we focus on the simulation of continuous-time processes (infinite-dimensional objects) in the Generative Adversarial Network (GAN) setting. More precisely, we focus on fractional Brownian motion, a centered Gaussian process with a specific covariance function. Since its stochastic simulation is known to be quite delicate, having at hand a generative model for full paths is really appealing for practical use. However, designing the architecture of such neural network models is a very difficult question and is therefore often left to empirical search. We provide a high-confidence bound on the uniform approximation of fractional Brownian motion $(B^H(t) : t \in [0,1])$ with Hurst parameter $H$ by a deep feedforward ReLU neural network fed with an $N$-dimensional Gaussian vector, with bounds on the network construction (number of hidden layers and total number of neurons). Our analysis relies, in the standard Brownian motion case ($H = 1/2$), on the Lévy construction of $B^H$ and, in the general fractional Brownian motion case ($H \neq 1/2$), on the Lemarié-Meyer wavelet representation of $B^H$. This work gives theoretical support to the use of, and guidelines for constructing, new generative models based on neural networks for simulating stochastic processes. It may well open the way to handling more complicated stochastic models written as stochastic differential equations driven by fractional Brownian motion.
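
To illustrate the $H = 1/2$ building block, here is a toy sketch (our own code, not the paper's construction) of the truncated Lévy(-Ciesielski) expansion: a Brownian path on $[0,1]$ synthesized from an $N$-dimensional Gaussian vector, exactly the kind of finite Gaussian input that would feed the generative network. The wavelet representation used for $H \neq 1/2$ is not sketched.

```python
import numpy as np

rng = np.random.default_rng(4)

def schauder(j, k, t):
    """Schauder (integrated Haar) hat function at level j, position k."""
    left, mid, right = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
    up, down = 2**(j / 2) * (t - left), 2**(j / 2) * (right - t)
    return np.where((t >= left) & (t < mid), up,
                    np.where((t >= mid) & (t < right), down, 0.0))

def brownian_levy_ciesielski(t, J):
    """Truncated Levy-Ciesielski expansion of Brownian motion on [0, 1].

    Consumes 1 + 2**0 + ... + 2**(J-1) = 2**J i.i.d. standard Gaussians,
    i.e. an N-dimensional Gaussian input vector with N = 2**J.
    """
    B = rng.normal() * t
    for j in range(J):
        for k in range(2**j):
            B = B + rng.normal() * schauder(j, k, t)
    return B

t = np.linspace(0.0, 1.0, 513)
path = brownian_levy_ciesielski(t, J=8)   # one approximate Brownian path
```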

Q&A for Contributed Session 28

This talk does not have an abstract.

Session Chair

Jong-June Jeon (University of Seoul)
